Install a new server with Terraform

  1. Change your .devops/terraform/ec2.tf to add a new server instance, something like:

    resource "aws_instance" "app_production" {
        instance_type = "t2.micro"
        ami = "ami-f4cc1de2"
        subnet_id = "${aws_subnet.subnet_1.id}"
    
        key_name = "${var.application_info["key_pair"]}"
        vpc_security_group_ids = [ "${aws_security_group.ec2_security_group.id}" ]
        associate_public_ip_address = true
    
        tags {
            Name = "Vukets Production Craft"
        }
    }
    
    • "app_production" is the name of the server, so you can call your server in terraform like ${aws_instance.app_production.public_ip}

    Change your .devops/terraform/rds.tf to add a new RDS instance, something like:

    # Production Craft DB Setup
    resource "aws_db_instance" "production" {
      allocated_storage    = 5
      engine               = "mariadb"
      engine_version       = "10.0.24"
      instance_class       = "db.t2.micro"
      storage_type         = "gp2"
      identifier           = "${var.application_info["name_lower"]}-production"
      name                 = "${var.application_info["name_lower"]}"
      username             = "vuketscraft"
      password             = "${var.db_password["production"]}"
      parameter_group_name = "default.mariadb10.0"
      multi_az             = false
      publicly_accessible  = true
      vpc_security_group_ids = ["${aws_security_group.db_security_group.id}"]
      db_subnet_group_name = "${aws_db_subnet_group.app.id}"
    }
    
  2. Run terraform plan, then terraform apply.
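
    Assuming the Terraform files live under .devops/terraform/, that looks like:

    cd .devops/terraform
    terraform plan    # review the planned changes first
    terraform apply   # create the new EC2 and RDS instances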

  3. SSH into your new AWS instance and get full access via sudo su.
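
    For example (a sketch; the key file and IP are placeholders for your actual values):

    ssh -i ~/.ssh/TODO_KEY_PAIR.pem ubuntu@TODO_INSTANCE_PUBLIC_IP
    sudo su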

  4. Install Nginx with apt-get update then apt-get install nginx, or follow the guide linked HERE.

  5. Install Docker using curl -sSL https://get.docker.com/ | sh

  6. Change Docker permissions so it can be run by the ubuntu user: sudo usermod -aG docker ubuntu
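
    Note the group change only takes effect after you log out and back in; after that you can confirm the ubuntu user can talk to Docker with:

    docker info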

  7. Install Certbot following just the install section outlined HERE

  8. # This corresponds to the port that your desired docker-container is running on
    upstream TODO_DOCKER_SERVICE_NAME {
      server 127.0.0.1:8080;
    }
    
    server {
      listen 80;
      listen [::]:80;
      server_name _;
    
      root "/var/www/html";
    
      location ~ /.well-known {
        allow all;
      }
    
      location / {
        proxy_pass         http://TODO_DOCKER_SERVICE_NAME;
        proxy_redirect     off;
        proxy_set_header   Host $host;
        proxy_set_header   X-Real-IP $remote_addr;
        proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header   X-Forwarded-Host $server_name;
      }
    }
    

    Replace the TODOs with real information. Place this file in /etc/nginx/sites-enabled/ and name it [client-website-url].conf (as an example, our Jira server conf file is named: jira.skyrocket.is.conf)

    • Delete the file /etc/nginx/sites-enabled/default
    • Restart NGINX using systemctl restart nginx
  9. Run the command: certbot certonly --webroot -w /var/www/html -d TODO_APPLICATION_URL -d www.TODO_APPLICATION_URL --post-hook="service nginx reload". Replace the TODOs where necessary, and add any subdomains you wish to include in the certificate.

    Make sure the output says success.
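
    You can also list the certificates Certbot manages to double-check:

    sudo certbot certificates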

  10. Run the command sudo openssl dhparam -out /etc/ssl/certs/dhparam.pem 2048

  11. # from https://cipherli.st/
    # and https://raymii.org/s/tutorials/Strong_SSL_Security_On_nginx.html
    
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";
    ssl_ecdh_curve secp384r1;
    ssl_session_cache shared:SSL:10m;
    ssl_session_tickets off;
    ssl_stapling on;
    ssl_stapling_verify on;
    resolver 8.8.8.8 8.8.4.4 valid=300s;
    resolver_timeout 5s;
    # Disable preloading HSTS for now.  You can use the commented out header line that includes
    # the "preload" directive if you understand the implications.
    #add_header Strict-Transport-Security "max-age=63072000; includeSubdomains; preload";
    add_header Strict-Transport-Security "max-age=63072000; includeSubdomains";
    add_header X-Frame-Options DENY;
    add_header X-Content-Type-Options nosniff;
    
    ssl_dhparam /etc/ssl/certs/dhparam.pem;
    ssl_certificate /etc/letsencrypt/live/TODO_APPLICATION_URL/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/TODO_APPLICATION_URL/privkey.pem;
    

    Place it in the server's /etc/nginx/snippets/ directory with the name ssl.conf.

  12. # This is an example file of what should go onto the server
    
    # This corresponds to the port that your desired docker-container is running on
    upstream TODO_DOCKER_SERVICE_NAME {
      server 127.0.0.1:8080;
    }
    
    # Listen on port 80 for www. then redirect to naked and https
    # .well-known is for letsencrypt
    server {
      listen 80;
      listen [::]:80;
      server_name TODO_APPLICATION_URL www.TODO_APPLICATION_URL;
    
      root "/var/www/html";
    
      location ~ /.well-known {
        allow all;
      }
    
      location / {
        return 301 https://TODO_APPLICATION_URL$request_uri;
      }
    }
    
    # If they come in on 443 and on www. redirect them
    server {
      listen 443 ssl http2;
      listen [::]:443 ssl http2;
      server_name www.TODO_APPLICATION_URL;
    
      # These should have been created using lets encrypt following this guide:
      # https://www.digitalocean.com/community/tutorials/how-to-secure-nginx-with-let-s-encrypt-on-ubuntu-16-04
      include snippets/ssl.conf;
    
      return 301 https://TODO_APPLICATION_URL$request_uri;
    }
    
    # Here's the actual server block that we want.
    server {
      listen 443 ssl http2;
      listen [::]:443 ssl http2;
    
      # These should have been created using lets encrypt following this guide:
      # https://www.digitalocean.com/community/tutorials/how-to-secure-nginx-with-let-s-encrypt-on-ubuntu-16-04
      include snippets/ssl.conf;
    
      server_name TODO_APPLICATION_URL;
    
      location ~ /.well-known {
        allow all;
      }
    
      location / {
        proxy_pass         http://TODO_DOCKER_SERVICE_NAME;
        proxy_redirect     off;
        proxy_set_header   Host $host;
        proxy_set_header   X-Real-IP $remote_addr;
        proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header   X-Forwarded-Host $server_name;
        proxy_set_header   X-Forwarded-Proto https;
      }
    }
    

    Now that you have a certificate and DH group, replace the conf you put into the /etc/nginx/sites-enabled/ directory with this file; remember to fill out all the TODOs.

    • Restart NGINX using systemctl restart nginx
    • Now you should see something at https://YOUR_URL; even a 502 Bad Gateway means the certificate was installed successfully.
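
    A quick sanity check (nginx -t validates the config; -k is needed because the certificate won't match localhost):

    sudo nginx -t
    sudo systemctl restart nginx
    curl -kI https://localhost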
  13. Run crontab -e, then add 15 3 * * * /usr/bin/certbot renew --quiet --renew-hook "/bin/systemctl reload nginx" to the end of the opened editor.

    This ensures Certbot runs the renew command at 3:15 am every day.

    • certbot renew will check all certificates installed on the system and update any that are set to expire in less than thirty days.
    • --quiet tells Certbot not to output information nor wait for user input.
    • --renew-hook ... will reload Nginx to pick up the new certificate files, but only if a renewal has actually happened.
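    • To verify the setup without waiting for the cron job, you can run sudo certbot renew --dry-run, which exercises the full renewal process against Let's Encrypt's staging endpoint.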
  14. Log in to registry.gitlab.com on the server using docker login registry.gitlab.com. You can use your own GitLab credentials for this, or use the Skyrocket GitLab account.

  15. ## named "docker-compose.yml"
    version: '2'
    services:
      app-production:
        image: TODO_REPOSITORY_IMAGE
        ports:
          - "8080:80"
        environment:
          - DB_HOST=TODO_DATABASE_HOST
          - DB_USER=TODO_DATABASE_USER
          - DB_DATABASE=TODO_DATABASE_NAME
          - DB_PASSWORD=TODO_DATABASE_PASSWORD
    

    Copy it into the home directory (/home/ubuntu/) of the server. Fill in the correct information for image by going to the GitLab repo, then to Registry, to get the image name. It should be something like registry.gitlab.com/skyrkt/scoutzoo/scoutzoo-craft/

    Then fill out the environment variables as needed. The ones in the example are for a simple Craft site.

    Run docker-compose up -d
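
    Then confirm the container is up and listening on the port Nginx proxies to (8080 in this example):

    docker-compose ps
    curl -I http://localhost:8080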

  16. ## named ".gitlab-ci.yml"
    variables:
      DOCKER_IMAGE: TODO_APPLICATION_DOCKER_IMAGE_LOCATION
      DOCKERFILE_LOCATION: ./
    
    stages:
      - build
      - test
      - deploy
    
    image: docker:dind
    
    ###############
    #   Builds    #
    ###############
    
    build-docker-latest:
      stage: build
      script:
        - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com
        - docker build -t $DOCKER_IMAGE:latest $DOCKERFILE_LOCATION
        - docker push $DOCKER_IMAGE:latest
      only:
        - master
    
    build-docker-tag:
      stage: build
      script:
        - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com
        - docker build -t $DOCKER_IMAGE:$CI_BUILD_TAG $DOCKERFILE_LOCATION
        - docker push $DOCKER_IMAGE:$CI_BUILD_TAG
      only:
        - tags
    
    ###############
    #    Test     #
    ###############
    
    lint:
      stage: test
      image: skyrkt/nginx-node:7.8.0
      script:
        - yarn
        - yarn build
      only:
        - /^feature.*$/
    
    ###############
    #   Deploy    #
    ###############
    
    deploy-production:
      stage: deploy
      image: webdevops/ssh
      environment:
        name: production
      script:
        - echo "$SSH_KEY" > key.pem
        - chmod 400 key.pem
        - ssh -o "StrictHostKeyChecking no" -i "key.pem" TODO_EC2_USER@COMPUTER "docker-compose pull app-production && docker-compose up --no-deps -d app-production"
      only:
        - master
    

    Place it in the root of your project.

    • app-production should be the same as the service name defined in docker-compose.yml
  17. Manually run the Docker commands the first time you set up this server, e.g. docker-compose pull app-production && docker-compose up --no-deps -d app-production.

    • Check whether Docker and Nginx are working properly via curl localhost; if not, check Nginx
    • If you can see the content by hitting the server's IP directly, it's good; if not, check the security group
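
    For example (run the second check from your own machine; the IP is a placeholder):

    curl -I http://localhost                  # should return a response through Nginx
    curl -I http://TODO_INSTANCE_PUBLIC_IP    # if this fails, check the security group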
  18. Copy the contents of the .pem file you used to SSH into the EC2 instance. Add it as a variable called SSH_KEY in the GitLab repository's CI/CD variables section. While you're here, disable shared runners and enable the Skyrocket CI.

  19. Add a new record in route53.tf pointing to the new server.
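
    A hedged sketch, assuming route53.tf already defines a zone named aws_route53_zone.primary (match whatever the file actually uses):

    resource "aws_route53_record" "app_production" {
      zone_id = "${aws_route53_zone.primary.zone_id}"
      name    = "TODO_APPLICATION_URL"
      type    = "A"
      ttl     = "300"
      records = ["${aws_instance.app_production.public_ip}"]
    }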

  20. Troubleshooting

    • If you encounter a permission denied error when running docker-compose pull, try running it under sudo su. If that works, the issue is that you logged in to the GitLab registry as root; log in to the registry again with normal user permissions.
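
      If so, exit the root shell and log in to the registry again as the normal user:

      exit                              # leave the root shell
      docker login registry.gitlab.com  # log in again as ubuntu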

Integrate GitLab CI

With the website running, we need to take care of the GitLab CI integration.

  1. Understand the .gitlab-ci.yml in your project root folder:

    variables:
      DOCKER_IMAGE: registry.gitlab.com/skyrkt/vukets-craft
      DOCKERFILE_LOCATION: ./
    
    stages:
      - build
      - test
      - deploy
    
    ###############
    #   Builds    #
    ###############
    
    build-docker-latest:
      stage: build
      image: skyrkt/dind-node:7.8.0
      script:
        - yarn
        - yarn build
        - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com
        - docker build -t $DOCKER_IMAGE:latest $DOCKERFILE_LOCATION
        - docker push $DOCKER_IMAGE:latest
      only:
        - master
    
    build-docker-release:
      stage: build
      image: skyrkt/dind-node
      script:
        - yarn
        - yarn build
        - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com
        - docker build -t $DOCKER_IMAGE:${CI_BUILD_REF_NAME#release/} $DOCKERFILE_LOCATION
        - docker push $DOCKER_IMAGE:${CI_BUILD_REF_NAME#release/}
      only:
        - /^release.*$/
    
    build-docker-tag:
      stage: build
      image: skyrkt/dind-node:7.8.0
      script:
        - yarn
        - yarn build
        - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com
        - docker build -t $DOCKER_IMAGE:$CI_BUILD_TAG $DOCKERFILE_LOCATION
        - docker push $DOCKER_IMAGE:$CI_BUILD_TAG
      only:
        - tags
    
    ###############
    #    Test     #
    ###############
    
    lint:
      stage: test
      image: skyrkt/nginx-node:7.8.0
      script:
        - yarn
        - yarn build
      only:
        - /^feature.*$/
    
    ###############
    #   Deploy    #
    ###############
    
    deploy-production:
      stage: deploy
      image: webdevops/ssh
      environment:
        name: production
        url: https://vukets.com
      script:
        - echo "$SSH_KEY" > key.pem
        - chmod 400 key.pem
        - ssh -o "StrictHostKeyChecking no" -i "key.pem" ubuntu@ec2-54-209-184-18.compute-1.amazonaws.com "sudo docker-compose pull vukets-production && sudo docker-compose up --no-deps -d vukets-production"
        - echo "Deployed to production server"
      when: manual
      only:
        - /^release.*$/
    
    deploy-staging:
      stage: deploy
      image: webdevops/ssh
      environment:
        name: staging
        url: https://staging.vukets.com
      script:
        - echo "$SSH_KEY" > key.pem
        - chmod 400 key.pem
        - ssh -o "StrictHostKeyChecking no" -i "key.pem" ubuntu@ec2-54-227-112-50.compute-1.amazonaws.com "sudo docker-compose pull vukets-staging && sudo docker-compose up --no-deps -d vukets-staging"
        - echo "Deployed to staging server"
      only:
        - master
    
    • stages lists the steps the GitLab pipeline will run
    • First, GitLab builds and pushes the image to the registry; the only key specifies which branches (or tags) each job runs on
    • Second, GitLab runs the lint test job
    • Third, GitLab SSHes into the server, pulls the Docker image, and deploys it
  2. Make sure the docker-compose.yml in the /home/ubuntu/ folder has the correct image setting

    (images should be pulled from the GitLab Registry)

    e.g. for the staging environment, the image name might be something like registry.gitlab.com/skyrkt/vukets-craft:latest

    For the production environment, the image name might be something like

    registry.gitlab.com/skyrkt/vukets-craft:release-1.0-latest
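
    For example, the corresponding service entry in docker-compose.yml might look like this (service and image names taken from the deploy script above):

    services:
      vukets-production:
        image: registry.gitlab.com/skyrkt/vukets-craft:release-1.0-latest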

SSL with ACM

AWS provides AWS Certificate Manager (ACM), which lets us request certificates for free and use them with websites hosted on AWS. Most information about ACM can be found here, and this article illustrates some common issues we've run into at Skyrocket.

  1. Request a certificate

    Make sure someone is able to receive the verification email from ACM. To receive the email, you can set your own address as the contact info of the domain registration.

  2. Install the certificate

    Certificates issued by ACM can only be used with a load balancer (ELB) or a CloudFront distribution. If you want SSL on a single EC2 instance, reference this article.

    Under the EB configuration, choose Load Balancer and select the SSL certificate ID.

    This way, the ELB handles the HTTPS (443/tcp) requests and forwards the traffic to the EC2 instances over HTTP (80/tcp), so you don't need to set up SSL on the instance.

  3. Issues you might meet

    • EB fails to deploy after configuring the LB

      You probably opened a listener on port 443 under "EC2 -> Load Balancer -> Port"; simply remove the 443 port there and redeploy.

    • 301 redirect loop

      Traffic from the ELB will always arrive as HTTP (80/tcp), so your web server receives an HTTP request and might redirect it to HTTPS; the ELB then handles that request and forwards it to the web server as HTTP again... a loop.

      To resolve this, just tell your web server that all requests came in over HTTPS.

      Some workable Stack Overflow answers:

      #  Apache: in .htaccess or httpd.conf
      SetEnvIfNoCase X-FORWARDED-PROTO "^https$" HTTPS
      
      #  Nginx
      fastcgi_param HTTPS on;
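
      If you'd rather enable the HTTPS flag only when the ELB actually forwarded a TLS request, a hedged Nginx variant maps the X-Forwarded-Proto header instead of hard-coding it:

      #  Nginx: the map goes in the http block
      map $http_x_forwarded_proto $fastcgi_https {
        default "";
        https   on;
      }

      #  then, in the PHP location block
      fastcgi_param HTTPS $fastcgi_https;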
      

CloudFront

AWS CloudFront is a CDN that caches your static content on CloudFront edge servers around the world, speeding up your website.

Usually we create a CloudFront distribution for a specific S3 bucket that stores assets (something like d2v2y58lsreu9v.cloudfront.net), and then create a CNAME for it (maybe assets.yyoga.ca), so we can serve the content through CloudFront.

However, we can also make CloudFront cache a whole website by setting the origin domain to the real server address, like an EC2 instance or an Elastic Beanstalk load balancer. In this way the whole website is cached, by default for 24 hours. If you're working on a website that has dynamic content or a CMS, you can set the "behavior" to forward all requests & headers to the origin server (which means NO caching).

Notice:

  • As of Aug 2017, you have to add the Host header to the behavior's whitelist if you want to cache the whole website; otherwise it will not work.

  • Usually, with other CDNs, if you want to cache a whole website, you need three steps:

    • Copy the whole website to another server, like "backup.domain.com".
    • Set the CDN origin server to that "backup.domain.com", which means it will grab the original data from the backup server. The CDN site might now be called "cdn.domain.com".
    • Give the real domain "domain.com" a CNAME pointing to "cdn.domain.com". Done.

    However, with CloudFront we don't need any of that: just set the origin server to the EC2/EB, then give "domain.com" an A record aliased to the CloudFront distribution.